Khadas VIM3
What a sunny day after the FIRST snow of this winter. Let me show you 3 pictures in the first row, and 3 videos in the second row. We need to enjoy both R&D and life…
| Green Timers Lake 1 | Green Timers Lake 2 | Green Timers Park |
|---|---|---|
![]() |
![]() |
![]() |
| A Pair of Swans | A Group of Ducks | A Little Stream In The Snow |
After a brief break, I started investigating Khadas VIM3 again.
1. About Khadas VIM3
Khadas VIM3 is a single-board computer based on the Amlogic A311D. Before we start, let’s carry out several simple comparisons.
1.1 Raspberry Pi 4 Model B vs. Khadas VIM3 vs. Jetson Nano Developer Kit
Please refer to hackerboards comparisons.
1.2 Amlogic A311D & S922X-B vs. Rockchip RK3399 (Pro) vs. Amlogic S912
Please refer to:
2. Ubuntu XFCE for Khadas VIM3
Let’s try out Ubuntu XFCE on Khadas VIM3.
2.1 Ubuntu XFCE Installation
As of today, I just downloaded VIM3_Ubuntu-xfce-bionic_Linux-5.5-rc2_arm64_SD-USB_V0.8.2-20200103.img, and flashed it onto a TF card as:
1 | ➜ khadas sudo dd bs=4M if=VIM3_Ubuntu-xfce-bionic_Linux-5.5-rc2_arm64_SD-USB_V0.8.2-20200103.img of=/dev/mmcblk0 conv=fsync |
2.2 SSH Into Khadas VIM3
1 | ➜ ~ ssh khadas@192.168.1.95 |
1 | ...... |
3. OpenVINO Installation
Let’s just follow this Intel official documentation Arm* 64 Single Board Computers and the Intel® Neural Compute Stick 2 (Intel® NCS 2).
Google Coral
First snow in 2020. Actually, it is ALSO the FIRST snow of the winter from 2019 to 2020.
| First Snow 1 | First Snow 2 | First Snow 3 |
|---|---|---|
![]() |
![]() |
![]() |
Both my son and the Chinese New Year are coming. Let’s start celebration mode. Today, I’m going to make hotpot.
| Hotpot 1 | Hotpot 2 | Hotpot 3 |
|---|---|---|
![]() |
![]() |
![]() |
It looks like these days everybody is doing edge computing. Today, we’re going to have some fun with Google Coral.
1. Google Coral USB Accelerator
Image cited from Coral official website.

Trying out the Google Coral USB Accelerator is comparatively simple. The ONLY thing to do is to follow Google Doc - Get started with the USB Accelerator. Anyway, let’s test it out with the following commands.
Make sure we are able to list the device.
1 | ➜ classification git:(master) ✗ lsusb |
We then run the example.
1 | ➜ classification git:(master) ✗ python3 classify_image.py \ |
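Under the hood, classify_image.py does little more than load the model through the Edge TPU delegate, invoke the interpreter once, and take the top-k scores. Here is a hedged sketch (the model filename is the one from the Get Started guide; the top_k helper is my own, not the example’s exact code):

```python
# Sketch of the classify_image.py flow from google-coral/tflite.
# The model path is a placeholder; top_k() is a plain NumPy helper.
import numpy as np

def top_k(scores, k=3):
    """Return (class_index, score) pairs for the k highest scores."""
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), float(scores[i])) for i in order]

try:
    from tflite_runtime.interpreter import Interpreter, load_delegate
    interpreter = Interpreter(
        model_path='mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite',
        experimental_delegates=[load_delegate('libedgetpu.so.1')])
    interpreter.allocate_tensors()
except (ImportError, OSError, ValueError):
    interpreter = None  # no Edge TPU runtime/model on this machine

# The post-processing works on any score vector:
print(top_k(np.array([0.1, 0.7, 0.05, 0.15]), k=2))
```

On a machine without the Edge TPU runtime, only the plain top-k post-processing runs; on the board, the interpreter is invoked and its output tensor is fed to the same helper.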
BTW, sooner or later I’m going to discuss the Google Coral TPU, the Intel Movidius VPU, and the Cambricon NPU, which has been adopted in the HuaWei HiKey 970 and Rockchip RK3399Pro. Just keep an eye on my blog.
2. Google Coral Dev Board
In the following, we’re going to discuss the Google Coral Dev Board in more detail. Image cited from Coral official website.

2.1 Mendel Installation
2.1.1 Mendel Linux Preparation
Google Coral’s Mendel Linux can be downloaded from https://coral.ai/software/. In our case, we are going to try Mendel Linux 4.0.
2.1.2 Connect Dev Board
On the host, we should be able to see:
1 | ➜ mendel-enterprise-day-13 lsusb |
Now what you see is a black screen. After having connected the Type C power cable, we should be able to see:
1 | power_bd71837_init |
That is what’s printed from Google Coral Dev Board. After having connected the Type C OTG cable, we should be able to see on the host:
1 | ➜ mendel-enterprise-day-13 fastboot devices |
2.1.3 Flash Coral Dev Board
1 | ➜ mendel-enterprise-day-13 ls |
You will be able to see that the Google Coral Dev Board is NOW connected. If you don’t see the EXPECTED output wishful-orange (192.168.101.2), just unplug and replug the Type C power cable.
2.1.4 Boot Mendel
Unfortunately, the mdt tool does NOT work properly.
1 | ➜ mendel-enterprise-day-13 mdt shell |
This bug has been clarified on StackOverflow. Modify line 86 of `$HOME/.local/lib/python3.6/site-packages/mdt/sshclient.py` from `if not self.address.startswith('192.168.100'):` to `if not self.address.startswith('192.168.10'):`, and the problem is solved.
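As a sanity check on why the patch works: mdt guards on a string prefix of the board’s address, so the OTG address 192.168.101.2 is rejected by the original '192.168.100' prefix but accepted by '192.168.10'. A minimal reproduction (the function name is mine; mdt wraps this check inside its SSH client class):

```python
# Minimal reproduction of the prefix guard in mdt's sshclient.py
# (function name is mine; mdt's actual code lives in a class method).
def address_allowed(address, prefix):
    """mdt refuses to connect unless the board address starts with prefix."""
    return address.startswith(prefix)

# The board's OTG address may be 192.168.101.2:
print(address_allowed('192.168.101.2', '192.168.100'))  # original check: False
print(address_allowed('192.168.101.2', '192.168.10'))   # patched check: True
```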
1 | ➜ mendel-enterprise-day-13 mdt shell |
After activating the network via nmtui, we can NOW clearly see that the wlan0 IP is automatically allocated.
1 | mendel@wishful-orange:~$ ip -c address |
Of course, we can setup a static IP for this particular Google Coral Dev Board afterwards.
2.1.5 SSH into Mendel
In order to SSH into Mendel and connect remotely, we need to follow Connect to a board’s shell on the host computer. You MUST run mdt pushkey before you can ssh into the board via its network IP, instead of the virtual IP via USB, say 192.168.100.2 or 192.168.101.2.
2.2 Demonstration
2.2.1 edgetpu_demo --device & edgetpu_demo --stream
Let’s ignore edgetpu_demo --device, for I ALMOST NEVER work in GUI mode. The demo video is on my YouTube channel, please refer to:
On console, it just displays as:
1 | mendel@deft-orange:~$ edgetpu_demo --stream |
2.2.2 Classification
1 | mendel@wishful-orange:~/Downloads/tflite/python/examples/classification$ python3 classify_image.py \ |
2.2.3 Camera
2.2.3.1 Google Coral camera
The Google Coral camera can be detected as a video device:
1 | mendel@wishful-orange:~$ v4l2-ctl --list-formats-ext --device /dev/video0 |
2.2.3.2 Face Detection Using Google TPU
My youtube real-time face detection video clearly shows Google TPU is seriously powerful.
On console, it displays:
1 | mendel@deft-orange:~$ edgetpu_detect_server \ |
OpenVINO on Raspberry Pi 4 with Movidius Neural Compute Stick II
Merry Christmas and happy new year everybody. I’ve been back in Vancouver for several days. These days, I’m updating this blog, which was FIRST written in September 2019. 2020 is coming, and we’re getting 1 year older. A bit sad, huh?
Okay… No matter what, let’s enjoy the song first: WE ARE YOUNG. Today, I joined the Free Software Foundation and started my journey of supporting Open Source Software BY CASH. For me, it’s not about poverty or richness. It’s ALL about FAITH.
To write something about Raspberry Pi is to say GOOD BYE to my Raspberry Pi 3B, and WELCOME Raspberry Pi 4 at the same time. Our target today is to build an AI edge computing end as in the following YouTube video:
1. About Raspberry Pi 4
1.1 Raspberry Pi 4 vs. Raspberry Pi 3B+
Before we start, let’s carry out a simple comparison between Raspberry Pi 4 and Raspberry Pi 3B+.
1.2 Raspbian Installation
1 | ➜ raspbian sudo dd bs=4M if=2019-09-26-raspbian-buster.img of=/dev/mmcblk0 conv=fsync |
1.3 BCM2711 Is Detected as BCM2835
1 | pi@raspberrypi:~ $ cat /proc/cpuinfo |
This issue seems to be a well-known bug. Raspberry Pi 4’s specification can be retrieved from The MagPi Magazine. More details about the development history of Raspberry Pi can be found on Wikipedia.
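Incidentally, even though the kernel reports “BCM2835”, the board can still be identified from the Revision field of /proc/cpuinfo. A small decoder for the new-style revision bitfield, as documented by the Raspberry Pi Foundation (the board-type table below is a subset, just enough for the boards in this post):

```python
# Decode a new-style Raspberry Pi revision code, e.g. the "Revision" line
# of /proc/cpuinfo. Bit layout per the Raspberry Pi documentation:
# bit 23 = new-style flag, bits 12-15 = processor, bits 4-11 = board type.
PROCESSORS = {0: 'BCM2835', 1: 'BCM2836', 2: 'BCM2837', 3: 'BCM2711'}
BOARD_TYPES = {0x08: '3B', 0x0d: '3B+', 0x11: '4B'}  # subset for this post

def decode_revision(rev_hex):
    rev = int(rev_hex, 16)
    if not rev & (1 << 23):
        return ('old-style code', 'unknown')
    processor = PROCESSORS.get((rev >> 12) & 0xF, 'unknown')
    board = BOARD_TYPES.get((rev >> 4) & 0xFF, 'unknown')
    return (processor, board)

print(decode_revision('c03111'))  # a Raspberry Pi 4 Model B revision code
```

So a Pi 4 with revision c03111 decodes to a BCM2711/4B, no matter what the Hardware line says.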
2. Movidius Neural Compute Stick on Raspberry Pi 4
Then, we just follow these 2 blogs, Run NCS Applications on Raspberry Pi and Adding AI to the Raspberry Pi with the Movidius Neural Compute Stick, to test out Intel Movidius Neural Compute Stick 2:
| Intel Movidius Neural Compute Stick 2 | Intel Movidius Neural Compute Stick 1 |
|---|---|
![]() |
![]() |
- Intel Movidius Neural Compute Stick 1 is NOT listed on Intel’s official website any more, but GitHub support for Intel Movidius Neural Compute Stick 1 can be found at https://github.com/movidius/ncsdk.
2.1 NCSDK Installation
We FIRST need to have ncsdk installed. Here, as described in Run NCS Applications on Raspberry Pi, we carry out the installation directly under folder ...../ncsdk/api/src.
1 | ➜ src git:(ncsdk2) ✗ make -j4 |
2.2 Test NCSDK Example Apps
1 | ➜ python hello_ncs.py |
2.3 mvnc Python Package
1 | pi@raspberrypi:~ $ python |
3. Transitioning from Intel Movidius Neural Compute SDK to Intel OpenVINO
By following Intel’s official documentation Transitioning from Intel® Movidius™ Neural Compute SDK to Intel® Distribution of OpenVINO™ toolkit, we are transitioning to OpenVINO, which supports both Intel NCS 2 and the original NCS.
3.1 Install OpenVINO for Raspbian
For the installation details of OpenVINO, please refer to the following 2 documentations:
We now extract the MOST up-to-date l_openvino_toolkit_runtime_raspbian_p_2019.3.334.tgz under folder /opt/intel/openvino. Let’s take a brief look at this folder:
1 | pi@raspberrypi:/opt/intel/openvino $ ls |
Clearly, by comparing with OpenVINO™ Toolkit - Deep Learning Deployment Toolkit repository, we know that the open source version of deployment_tools contains some more content than the trimmed version for Raspbian. We’ll use model-optimizer for sure. Therefore, we checked out dldt, and put two subfolders model-optimizer and tools under folder /opt/intel/openvino/deployment_tools.
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools $ ls |
3.2 Build OpenVINO Samples
Before starting to build the OpenVINO samples, please have OpenCV built from source and installed.
Note: Be sure to enable -DCMAKE_CXX_FLAGS='-march=armv7-a' while building dldt/samples.
Those samples are ALL for C++. For Python samples, you may have to figure out where to put openvino and where those .so files should go. Please refer to Intel Forum Issue 810084 for the solutions to my problem. Anyway, the final step is to test as follows:
1 | pi@raspberrypi:/opt/intel/openvino/inference_engine/samples/python_samples $ python hello_query_device/hello_query_device.py |
3.3 OpenVINO Test
Please refer to Device-specific Plugin Libraries for ALL possible device types.
3.3.1 Object Detection
We then download two model files as given in the blog Install OpenVINO™ toolkit for Raspbian* OS.
Afterwards, start running object_detection_sample_ssd as follows:
1 | pi@raspberrypi:/opt/intel/openvino/inference_engine/samples/build $ ./armv7l/Release/object_detection_sample_ssd -m face-detection-adas-0001.xml -d MYRIAD -i me.jpg |
| me.jpg | out_0.bmp |
|---|---|
![]() |
![]() |
3.3.2 Image Classification
In this section, we are going to test out another OpenVINO example: Image Classification C++ Sample Async. After reading this blog, it wouldn’t be hard to notice that the MOST important thing we’re missing here is the model file alexnet_fp32.xml. Let’s just keep it in mind for now.
And, let’s review our previous example a bit: Object Detection – we downloaded the face-detection-adas-0001 model online and used it directly. So, questions:
- Are we able to download alexnet_fp32.xml from online this time again?
- Where can we download a whole bunch of open source models?
3.3.2.1 Open Model Zoo
It wouldn’t be hard for us to google out OpenVINO™ Toolkit - Open Model Zoo repository, under which model face-detection-adas-0001 is just sitting there. However, face-detection-adas-0001.xml and face-detection-adas-0001.bin are missing.
1 | $ ls face-detection-adas-0001/ |
Let’s check out open_model_zoo and put it under folder /opt/intel/openvino/deployment_tools.
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools $ ls |
It seems that each folder under intel and public contains the detailed info of the corresponding model. For instance, file intel/face-detection-adas-0001/model.yml contains all the info about model face-detection-adas-0001.
However, what if ONLY a caffe model is provided? In order to use those models on Movidius, we need to generate two files, one .xml and one .bin, optimized for Movidius, from some particular caffe models. Intel OpenVINO toolkit issue 798441 provides a way to generate these two files.
3.3.2.2 Download Caffe Model Files
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools $ cd ./open_model_zoo/tools/downloader/ |
Three files, including the large model file alexnet.caffemodel, have been downloaded.
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader $ ll public/alexnet/ |
3.3.2.3 Model Optimization
We then need to optimize the downloaded caffe model and make it feedable to OpenVINO.
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader $ cd ../../../model_optimizer/ |
Note: You may meet the following ERRORs during model optimization.
1 | [ ERROR ] |
Clearly, for networkx, the ERROR message is kind of ridiculous.
Anyway, if you meet the above 2 errors, please DOWNGRADE your packages as follows:
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools/model_optimizer $ pip install protobuf==3.6.1 --user |
3.3.2.4 Run The Sample
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools/inference_engine/samples/build $ ./armv7l/Release/classification_sample_async -i ./me.jpg -m /opt/intel/openvino/deployment_tools/model_optimizer/alexnet.xml -nt 5 -d HETERO:MYRIAD |
4. OpenCV DNN with OpenVINO’s Inference Engine
For the example openvino_fd_myriad.py given in Install OpenVINO™ toolkit for Raspbian* OS, the TOUGH thing is how to build [OpenCV](https://opencv.org/) with OpenVINO’s Inference Engine. It’s kind of complicated.
4.1 Intel64
On my laptop, of course, we are building OpenCV for the Intel64 architecture, with NVidia GPU + CUDA support, since DNN in OpenCV requires either CUDA or OpenCL.
The key file modified by me is opencv/cmake/OpenCVDetectInferenceEngine.cmake, as follows:
1 | # The script detects Intel(R) Inference Engine installation |
And, my test of openvino_fd_myriad.py shows that the performance of the adopted model face-detection-adas-0001 is NOT as good as expected.
1 | import cv2 as cv |
| Parents At First Starbucks | Detected Faces |
|---|---|
![]() |
![]() |
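For reference, the whole openvino_fd_myriad.py pipeline boils down to: read the IR pair with cv.dnn.readNet, route inference to the stick with DNN_TARGET_MYRIAD, and threshold the SSD output blob. A hedged sketch follows (the parse_detections helper and the file paths are mine, not Intel’s exact code):

```python
# Hedged sketch of the openvino_fd_myriad.py pipeline; parse_detections()
# and the file names are illustrative, not Intel's exact code.
import numpy as np

def parse_detections(out, width, height, conf_threshold=0.5):
    """Convert an SSD output blob of shape [1, 1, N, 7] into pixel boxes.

    Each row is (image_id, label, confidence, xmin, ymin, xmax, ymax),
    with coordinates normalized to [0, 1]."""
    boxes = []
    for det in np.asarray(out).reshape(-1, 7):
        conf = float(det[2])
        if conf >= conf_threshold:
            x0, y0, x1, y1 = det[3:7]
            boxes.append((int(x0 * width), int(y0 * height),
                          int(x1 * width), int(y1 * height), conf))
    return boxes

try:
    import cv2 as cv
    net = cv.dnn.readNet('face-detection-adas-0001.xml',
                         'face-detection-adas-0001.bin')
    net.setPreferableTarget(cv.dnn.DNN_TARGET_MYRIAD)  # run on the NCS 2
    frame = cv.imread('me.jpg')
    blob = cv.dnn.blobFromImage(frame, size=(672, 384), ddepth=cv.CV_8U)
    net.setInput(blob)
    print(parse_detections(net.forward(), frame.shape[1], frame.shape[0]))
except Exception:
    pass  # no OpenCV+IE/NCS 2 on this machine; the parser still works below

fake = np.array([[[[0, 1, 0.9, 0.25, 0.25, 0.75, 0.5],
                   [0, 1, 0.2, 0.5, 0.5, 0.6, 0.6]]]])
print(parse_detections(fake, 100, 100))  # only the 0.9-confidence box survives
```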
4.2 armv7l
OpenCV dnn net.forward() fails
1 | pi@raspberrypi:/opt/intel/openvino/deployment_tools/inference_engine/samples/build $ python openvino_fd_myriad.py |
5. My Built Raspbian ISO With OpenCV4 + OpenVINO
You are welcome to try my built image rpi4-raspbian-opencv4-openvino-ncsdk.img, which is about 10G and composed of:
Everything has ALREADY been updated and built successfully.
Detectron2
The autumn in both Vancouver and Seattle is gorgeous…
| Overview Seattle on Space Needle | Around Space Needle |
|---|---|
![]() |
![]() |
| Space Needle | So Heavy |
![]() |
![]() |
| Ferris Wheel | Pier 55 |
![]() |
![]() |
| Vancouver Maple | UBC Poisonous Mushroom |
|---|---|
![]() |
![]() |
Alright, let’s rapidly test Detectron2.
Installation is summarized in detail in INSTALL.md.
We can simply follow GETTING_STARTED.md for some simple demonstrations. Make sure you’ve downloaded the demo pictures from the Detectron1 demo and saved them under Detectron2’s demo folder.
1 | longervision-GT72-6QE% python demo/demo.py --config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ |
Let’s take a look at the result:

And of course, my pictures taken in Seattle and Vancouver:
| Overview Seattle on Space Needle | Around Space Needle |
|---|---|
![]() |
![]() |
| Space Needle | So Heavy |
![]() |
![]() |
| Ferris Wheel | Pier 55 |
![]() |
![]() |
![]() |
![]() |
Oooooops… it seems there are still quite a lot of erroneous detections…
Anyway… tomorrow, 热干面 (hot dry noodles). Google Translate: Hot Noodles with Sesame Paste…
Tensorflow 2.0
Flying back to China soon. Before leaving, here comes a simple blog testing TensorFlow 2.0. In this blog, I strictly follow Amita Kapoor and Ajit Jaokar’s free book Getting Started with TensorFlow 2.0.
For simplicity, let’s try it out directly:
1. Tensorflow 2.0
2. Tensorflow Dataset
Build Toolchain Using Crosstool-NG
Today, we’re going to build our own toolchain using crosstool-NG. There are many cases where we want to build a particular toolchain for a specific embedded development board. One MOST important reason is probably that the board has limited resources to build upon, so building some software directly on the board is VERY time-consuming. Conversely, cross compiling on the host PC is MUCH more time-efficient.
In this blog, for simplicity, we take Raspberry Pi 3B as our demo development board, for which we are building the cross compiler. Some references can be found at:
1. Installation
How to install crosstool-NG is thoroughly summarized on its official website. In my case, I had it installed under folder /opt/compilers/crosstool-ng. Let’s take a look:
1 | longervision-GT72-6QE% pwd |
And make sure /opt/compilers/crosstool-ng/bin is in the environment variable PATH.
2. Configuration
Under any directory where you want to save your .config file, we can configure our target cross compiler.
1 | longervision-GT72-6QE% ct-ng menuconfig |

According to elinux: RPi Linaro GCC Compilation, we need to make the following selections:
- Paths and misc options:
  - Try features marked as EXPERIMENTAL: ticked
  - Prefix directory: input the full path where you want to save the built toolchains
  - Number of parallel jobs: 4. Sorry that I’ve NO idea if this is the number of cores on Raspberry Pi 3B, but Raspberry Pi 3B does have 4 cores.
- Target options:
  - Target Architecture: arm
  - Default instruction set mode: arm
  - Use EABI: ticked
  - Append `hf` to the tuple (EXPERIMENTAL): ticked
  - Endianness: Little endian
  - Bitness: 32-bit
  - Emit assembly for CPU: cortex-a53
  - Use specific FPU: vfp
  - Floating point: hardware (FPU)
- Toolchain options:
  - Tuple’s vendor string: rpi
- Operating System:
  - Target OS: linux
  - Version of Linux: 4.20.8. Currently, I flashed Raspbian Buster with desktop and recommended software, 2019-07-10 onto my Raspberry Pi 3B; it comes with a kernel of version 4.19.66, which is NOT in the list. That’s why I decided to select 4.20.8 and try my luck.
```
➜ ~ uname -a
Linux raspberrypi 4.19.66-v7+ #1253 SMP Thu Aug 15 11:49:46 BST 2019 armv7l GNU/Linux
```

- Binary utilities:
  - Binary format: ELF
  - Version of binutils: 2.31.1
```
➜ ~ apt show binutils
Package: binutils
Version: 2.31.1-16+rpi1
Priority: optional
Section: devel
Maintainer: Matthias Klose <doko@debian.org>
Installed-Size: 95.2 kB
Provides: binutils-gold, elf-binutils
Depends: binutils-common (= 2.31.1-16+rpi1), libbinutils (= 2.31.1-16+rpi1), binutils-arm-linux-gnueabihf (= 2.31.1-16+rpi1)
Suggests: binutils-doc (>= 2.31.1-16+rpi1)
Conflicts: binutils-multiarch (<< 2.27-8), modutils (<< 2.4.19-1)
Homepage: https://www.gnu.org/software/binutils/
Download-Size: 56.9 kB
APT-Manual-Installed: no
APT-Sources: http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages
Description: GNU assembler, linker and binary utilities
 The programs in this package are used to assemble, link and manipulate
 binary and object files. They may be used in conjunction with a compiler
 and various libraries to build programs.
```

- C-library:
  - C library: glibc
  - Version of glibc: 2.28
```
➜ ~ apt show libc-bin
Package: libc-bin
Version: 2.28-10+rpi1
Priority: required
Essential: yes
Section: libs
Source: glibc
Maintainer: GNU Libc Maintainers <debian-glibc@lists.debian.org>
Installed-Size: 3,015 kB
Depends: libc6 (>> 2.28), libc6 (<< 2.29)
Recommends: manpages
Homepage: https://www.gnu.org/software/libc/libc.html
Download-Size: 657 kB
APT-Manual-Installed: yes
APT-Sources: http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages
Description: GNU C Library: Binaries
 This package contains utility programs related to the GNU C Library.
 .
  * catchsegv: catch segmentation faults in programs
  * getconf: query system configuration variables
  * getent: get entries from administrative databases
  * iconv, iconvconfig: convert between character encodings
  * ldd, ldconfig: print/configure shared library dependencies
  * locale, localedef: show/generate locale definitions
  * tzselect, zdump, zic: select/dump/compile time zones
```

- C compiler:
  - Show gcc versions from: GNU
  - Version of gcc: 8.3.0
  - gcc extra config: --with-float=hard
  - Link libstdc++ statically into the gcc binary: ticked
  - C++: ticked
```
➜ ~ apt show gcc
Package: gcc
Version: 4:8.3.0-1+rpi2
Priority: optional
Section: devel
Source: gcc-defaults (1.181+rpi2)
Maintainer: Debian GCC Maintainers <debian-gcc@lists.debian.org>
Installed-Size: 46.1 kB
Provides: c-compiler, gcc-arm-linux-gnueabihf (= 4:8.3.0-1+rpi2)
Depends: cpp (= 4:8.3.0-1+rpi2), gcc-8 (>= 8.3.0-1~)
Recommends: libc6-dev | libc-dev
Suggests: gcc-multilib, make, manpages-dev, autoconf, automake, libtool, flex, bison, gdb, gcc-doc
Conflicts: gcc-doc (<< 1:2.95.3)
Download-Size: 5,200 B
APT-Manual-Installed: no
APT-Sources: http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages
Description: GNU C compiler
 This is the GNU C compiler, a fairly portable optimizing compiler for C.
 .
 This is a dependency package providing the default GNU C compiler.
```
After we save the configuration to file .config, we Exit the crosstool-NG configuration dialog.
3. Build
Please remember to:
- `unset LD_LIBRARY_PATH` before building. Otherwise, you’ll meet some ERROR messages.
- `mkdir ~/src` before building. Otherwise, whenever you rerun `ct-ng build`, you’ll have to download ALL required packages from scratch.
1 | longervision-GT72-6QE% ct-ng build |
This process may take a while. I’m going to sleep tonight. Continue tomorrow…
Alright, let’s continue today. Glad to know it’s built successfully.
Let’s FIRST take a look at what’s under the current folder.
1 | longervision-GT72-6QE% ls |
And then, let’s take a look at what’s built under the specified destination folder.
1 | longervision-GT72-6QE% ls ~/....../CrossCompile/RPi |
Finally, let’s take a look at the version of our built cross compilers for Raspberry Pi 3B.
1 | longervision-GT72-6QE% arm-rpi-linux-gnueabihf-gcc --version |
Additional issue: it seems current crosstool-NG does NOT officially support Python. Please refer to my issue at crosstool-NG.
4. Compile/Build a Package with Generated Toolchain
Most packages nowadays are
- either supported by make: `./configure` -> `make` -> `make install`
- or supported by CMake: `mkdir build` -> `cd build` -> `ccmake ../` -> `make` -> `make install`
How to use our generated toolchain to compile/build our target packages?
- For the FIRST option, you can follow crosstool-NG’s Using the toolchain.
- For the SECOND option, you are welcome to follow CMake Toolchains.
Today, we’re going to take the package flann as an example, which is to be built with CMake.
4.1 CMakeLists.txt Modification
4.1.1 CMake Toolchains - Cross Compiling for Linux
By following CMake Toolchains - Cross Compiling for Linux, we FIRST modify flann CMakeLists.txt a bit by adding the following lines before project(flann).
1 | cmake_minimum_required(VERSION 2.6) |
- CMAKE_SYSROOT specifies the sysroot directory which emulates your target environment, here, Raspberry Pi 3B.
- CMAKE_STAGING_PREFIX is where the STAGING results are stored, for the reason that final results may need to be built in multiple stages. You may refer to Linux From Scratch for further background knowledge.
- tools specifies the build tool directory. Make sure all generated cross compiling tools are under folder `${tools}/bin`.
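Under those conventions, the lines added before project(flann) look roughly like the following. Every path here is illustrative and must be replaced with your own prefix directory; the variable set mirrors the CMake Toolchains documentation:

```cmake
# Cross-compiling setup per "CMake Toolchains - Cross Compiling for Linux".
# All paths below are illustrative placeholders.
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR arm)

set(CMAKE_SYSROOT /home/user/CrossCompile/RPi/arm-rpi-linux-gnueabihf/sysroot)
set(CMAKE_STAGING_PREFIX /home/user/CrossCompile/staging)

set(tools /home/user/CrossCompile/RPi)
set(CMAKE_C_COMPILER ${tools}/bin/arm-rpi-linux-gnueabihf-gcc)
set(CMAKE_CXX_COMPILER ${tools}/bin/arm-rpi-linux-gnueabihf-g++)

# Search headers and libraries only in the sysroot, but programs only on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
```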
4.1.2 Ignore hdf5
In addition, for the emulated Raspberry Pi 3B sysroot, hdf5 is NOT supported. Therefore, let’s simply comment out the following line in flann CMakeLists.txt.
1 | #find_hdf5() |
4.2 Cross Compile
Now, let’s start cross-compiling flann.
1 | longervision-GT72-6QE% mkdir build |
Press c and then t, and you’ll see the cross compiling toolchains have been automatically configured as:
1 | BUILD_CUDA_LIB *OFF |
The LAST step before make is to modify some of the parameters accordingly, in my case:
- BUILD_PYTHON_BINDINGS: ON -> OFF
- CMAKE_CXX_FLAGS: -I/usr/include (for lz4.h from my host Ubuntu 19.04)
- CMAKE_C_FLAGS: -I/usr/include (for lz4.h from my host Ubuntu 19.04)
- CMAKE_VERBOSE_MAKEFILE: OFF -> ON
- CMAKE_INSTALL_PREFIX: `/usr/local/` -> the directory under which you want to install, which can be IGNORED for now
Now, press c, and you’ll see:
1 | CMake Warning at CMakeLists.txt:99 (message): |
A warning about missing hdf5 is clearly reasonable and acceptable. Then press g.
Finally, it’s the time for us to cross build flann.
1 | longervision-GT72-6QE% make -j8 |
VERY Important:
- If you build flann from source directly on a Raspberry Pi 3B, your system is going to hang HERE, possibly due to lack of memory.
- Raspberry Pi 3B has ONLY 4 cores, but you can use MORE cores on your host PC, which is clearly one advantage of cross compiling.
Finally, you’ll see flann has been successfully cross compiled as:
1 | ....../flann/src/cpp/flann/algorithms/autotuned_index.h: In member function 'void flann::AutotunedIndex<Distance>::optimizeKMeans(std::vector<flann::AutotunedIndex<Distance>::CostData>&) [with Distance = flann::L2<double>]': |
You can now:
- `make install` to install the built/generated libraries under CMAKE_INSTALL_PREFIX
- or copy and paste the built/generated libraries onto Raspberry Pi 3B and use them directly.

BTW: do NOT forget to install the header files.
5. Additional Issues
Multilib/multiarch seems to be problematic nowadays. Please pay attention to Multilib/multiarch. Some related issues are enumerated at the end of this blog.
- Multilib caveats from Official Notes on specific toolchain features
- fatal error: gnu/stubs-hard.h: No such file or directory from crosstool-NG
- fatal error: gnu/stubs-hard.h: No such file or directory from gcc-cross-compiler
- Build error: “features.h: No such file or directory” from risc-v tools
- crosstool-NG risc-v linux multilib issue
- raspberrypi toolchain issue
Alright, that’s all for today. Let me go to bed. Good bye…
Stereo Vision on VCSBC nano Z-RH-2 - PART I
Today, we are going to talk about a fabulous project: stereo vision on a zynq-7010 board.
1. VCSBC nano Z-RH-2
1.1 Hardware
We are using a VCSBC nano Z-RH-2 board for today’s experiment. The board adopted looks like the following:
| Front | Back | Connector |
|---|---|---|
![]() |
![]() |
![]() |
For more detailed specifications, please refer to Vision Components’ official website.
1.2 Software
After you set up a static IP for this Vision Components SBC, it’s pretty straightforward to ssh into the system.
1 | longervision-GT72-6QE% ssh user@192.168.1.79 |
Currently, VC provides Linux Kernel 3.14.79.
1 | user@VC-Z:~$ uname -a |
And, let’s take a look at the dual ARMv7 CPUs on zynq-7010.
1 | user@VC-Z:~$ cat /proc/cpuinfo |
2. Stereo Vision
Sorry everybody. Today, I ONLY test stereo vision on the ARM cores. I’ll try to figure out how to flash an open source stereo vision IP onto the zynq-7010, or write my own, ASAP.
Hmmmmmmmm… It’s better I keep my code in dark???
2.1 Classical Image Pairs
Let’s try out the stereo vision on some .pgm image pairs FIRST.
1 | user@VC-Z:~/longervision$ ./pgmpair ../images/aloe_left.pgm ../images/aloe_right.pgm |
My GOD… It’s UNBELIEVABLY SLOW.
| aloe_left | aloe_right |
|---|---|
![]() |
![]() |
| aloe_left Stereo | aloe_right Stereo |
![]() |
![]() |
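For the record, the pgmpair experiment above boils down to classical block matching, which OpenCV ships as StereoBM. A minimal sketch, with hedged assumptions (numDisparities/blockSize are my guesses, not the actual settings; the disparity-to-depth helper is the standard pinhole-stereo relation):

```python
# Classical block-matching stereo, essentially what pgmpair does on the pairs.
# numDisparities/blockSize below are illustrative guesses.
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole-stereo relation: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

try:
    import cv2
    left = cv2.imread('aloe_left.pgm', cv2.IMREAD_GRAYSCALE)
    right = cv2.imread('aloe_right.pgm', cv2.IMREAD_GRAYSCALE)
    if left is not None and right is not None:
        # numDisparities must be a multiple of 16; blockSize must be odd.
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left, right)  # int16, fixed-point (x16)
        cv2.imwrite('aloe_disparity.png',
                    np.clip(disparity / 16.0, 0, 255).astype(np.uint8))
except ImportError:
    pass  # OpenCV not installed on this machine

# The geometry helper is independent of OpenCV:
print(disparity_to_depth(50, 100, 0.5))  # 1.0 metre at 50 px disparity
```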
2.2 Live Video Pairs
The BEST demo code to test Vision Components stereo vision is Eclipse_Example_Projects_VC_Z.
2.2.1 imageCaptureTest
1 | #./imageCaptureTest |
2.2.2 imageCaptFPS
1 | 03:41:47[root@VC-Z] /home/user/vc/Eclipse_Example_Projects_VC_Z/imageCaptFPS |
The above 2 examples run directly on Vision Components’ board without a display, since the provided cable has a VGA connector, which has ALREADY been outdated for many years. Therefore, in order to show the captured image pairs, in the next section, we’ll stream the captured data to a host computer and display the real-time video pairs there.
2.3 Stream/Display From Host Computer
According to Vision Components’ official documentation VCLinux_Getting_Started.pdf, images captured from the camera can be displayed on a remote host PC.
2.3.1 Eclipse_Example_Projects_VC_Z (Not Preferred)
Eclipse_Example_Projects_VC_Z.zip provides some source code, which displays images captured from the camera in the host PC’s Eclipse, by adding the camera as a Remote System to Eclipse.
Anyway, to use this method, you need to prepare ALL the following software and packages in advance.
- Eclipse
- CDT: install by Check for Updates with p2 software repository: http://download.eclipse.org/tools/cdt/releases/9.8
- Direct Remote C++ Debugging: installed from Eclipse Marketplace
2.3.2 vcimgnetclient.py
Besides the above method, a much more straightforward way is to adopt the python script vcimgnetclient.py provided by Vision Components. However, vcimgnetclient.py is ONLY python2 compatible, and Vision Components has NO plan to provide a python3 compatible version of it.
Therefore, the KEY to using this 2nd method is to make vcimgnetclient.py python3 compatible.
2.3.2.1 PyGTK to PyGTK3
1) 2to3
1 | longervision-GT72-6QE% 2to3 vcimgnetclient.py |
2) pygi-convert
Please refer to General Porting Tips
1 | longervision-GT72-6QE% ./pygi-convert.sh vcimgnetclient.py |
3) Try Running
1 | longervision-GT72-6QE% python vcimgnetclient.py |
4) pack_start Arguments
For ALL vbox.pack_start calls, add one parameter 0 at the end for each case. For instance, change
`vbox.pack_start(self.menu_bar, False, False)`
to
`vbox.pack_start(self.menu_bar, False, False, 0)`
Oh my god… There are STILL SO MANY things to do in order to make vcimgnetclient.py Python3 compatible. Therefore, I implemented my own vcimgnetclient_qt.py based on PyQt5.
2.3.3 Longer Vision’s vcimgnetclient_qt.py
2.3.3.1 Server
1 | 22:42:22[root@VC-Z] /home/user |
2.3.3.2 Client
Sorry, I’m NOT going to show my code, but the performance can be demonstrated as follows:

Kaggle Competition - APTOS 2019 Blindness Detection
Oh-my-god, a kaggle competition about diabetes… Hmmm, I have something to do these days… It’s also a GOOD chance for me to learn some medical terms… In this blog, we’re going to try out this kaggle competition - APTOS 2019 Blindness Detection.
1. Pre-processing Diabetic Retinopathy Images
It’s NOT hard for us to locate APTOS [UpdatedV14] Preprocessing- Ben’s & Cropping, which seems to me a professional snippet of code for preprocessing diabetic retinopathy images. Let’s just try it out directly.
1.1 Implementation
The implementation is cited directly from APTOS [UpdatedV14] Preprocessing- Ben’s & Cropping, with trivial modifications. You are welcome to download my snippet of code here.





































